19 research outputs found

    Deep Lesion Graphs in the Wild: Relationship Learning and Organization of Significant Radiology Image Findings in a Diverse Large-scale Lesion Database

    Radiologists in their daily work routinely find and annotate significant abnormalities on a large number of radiology images. Such abnormalities, or lesions, have been collected over the years and stored in hospitals' picture archiving and communication systems. However, they are largely unsorted and lack semantic annotations such as type and location. In this paper, we aim to organize and explore them by learning a deep feature representation for each lesion. A large-scale and comprehensive dataset, DeepLesion, is introduced for this task. DeepLesion contains bounding boxes and size measurements of over 32K lesions. To model their similarity relationships, we leverage multiple sources of supervision, including types, self-supervised location coordinates, and sizes. These require little manual annotation effort but describe useful attributes of the lesions. A triplet network is then used to learn lesion embeddings, with a sequential sampling strategy to capture their hierarchical similarity structure. Experiments show promising qualitative and quantitative results on lesion retrieval, clustering, and classification. The learned embeddings can be further employed to build a lesion graph for various clinically useful applications. We propose algorithms for intra-patient lesion matching and missing annotation mining. Experimental results validate their effectiveness. Comment: Accepted by CVPR 2018. DeepLesion URL added.
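The triplet-network training described above can be sketched with a hinge-style triplet loss. This is a minimal illustration of the general technique, not the paper's implementation: the margin value, feature dimensionality, and function names here are assumptions.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Hinge-style triplet loss: require the anchor-negative distance to
    exceed the anchor-positive distance by at least `margin`."""
    d_pos = float(np.sum((anchor - positive) ** 2))
    d_neg = float(np.sum((anchor - negative) ** 2))
    return max(0.0, d_pos - d_neg + margin)

# The sequential sampling idea (simplified): draw positives that match the
# anchor on progressively more specific attributes (type, then location,
# then size), so sampled triplets reflect a hierarchical similarity structure.
```

With embeddings learned this way, lesions of the same type and location end up close in feature space, which is what enables the retrieval and clustering experiments described above.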

    Common pitfalls and recommendations for using machine learning to detect and prognosticate for COVID-19 using chest radiographs and CT scans

    Machine learning methods offer great promise for fast and accurate detection and prognostication of coronavirus disease 2019 (COVID-19) from standard-of-care chest radiographs (CXR) and chest computed tomography (CT) images. Many articles were published in 2020 describing new machine learning-based models for both of these tasks, but it is unclear which are of potential clinical utility. In this systematic review, we consider all published papers and preprints, for the period from 1 January 2020 to 3 October 2020, which describe new machine learning models for the diagnosis or prognosis of COVID-19 from CXR or CT images. All manuscripts uploaded to bioRxiv, medRxiv and arXiv, along with all entries in EMBASE and MEDLINE in this timeframe, are considered. Our search identified 2,212 studies, of which 415 were included after initial screening and, after quality screening, 62 studies were included in this systematic review. Our review finds that none of the models identified are of potential clinical use due to methodological flaws and/or underlying biases. This is a major weakness, given the urgency with which validated COVID-19 models are needed. To address this, we give many recommendations which, if followed, will solve these issues and lead to higher-quality model development and well-documented manuscripts.

    Learning and Indexing of Texture Descriptors for Content Based Image Retrieval in Medical Imaging Data

    Medical imaging is a fundamental tool of clinical routine, used for the diagnosis and monitoring of disease. Archiving these images creates massive databases: up to thousands of images are recorded every day in a single hospital. Content Based Image Retrieval (CBIR) is one way to increase the accessibility of such databases. Rather than relying on text, CBIR uses actual visual information to find visually similar images for a query image. Visual similarity is a subjective notion bound to a particular context, so no standard design or methodology for CBIR exists; methods must be adapted to the data and the application goal. The present work focuses on the development and evaluation of specialized CBIR components for three-dimensional medical images. The first part of the work develops a visual descriptor (a numerical representation of visual features) that allows the similarity of images to be compared, in particular radiologically relevant differences in image texture. Such descriptors are typically driven by visual data alone, but medical images often come with textual information in the form of metadata or radiology reports. This information conveys semantics, as it indicates radiologically relevant visual appearances, and we propose a way to link visual features to it. The second part identifies and compares indexing methods that enable fast approximate nearest-neighbor search over large collections of descriptor vectors. Both the descriptor and the indexing methods are evaluated on real-world data recorded in the daily routine of a hospital, with four lung anomalies defined as examples of relevant image changes. The results show that semantic information improves the ranking of relevant images for three of the four anomalies. The effects of approximate nearest-neighbor search on ranking quality are also analyzed: besides increasing search speed, approximate methods reduce memory consumption by 98% without considerably decreasing retrieval quality.
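The trade-off described above (much lower memory and faster search at a small cost in exactness) can be illustrated with a toy approximate index based on random-projection hashing. This is a generic sketch of the technique family, not the indexing methods evaluated in the thesis; the class name, number of hash bits, and bucket layout are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

class RandomProjectionIndex:
    """Toy approximate nearest-neighbor index: descriptors are hashed by the
    signs of a few random projections, and queries search only one bucket."""

    def __init__(self, dim, n_bits=8):
        self.planes = rng.standard_normal((n_bits, dim))
        self.buckets = {}

    def _key(self, v):
        # Binary signature: which side of each random hyperplane v falls on.
        return tuple((self.planes @ v > 0).astype(int))

    def add(self, idx, v):
        self.buckets.setdefault(self._key(v), []).append((idx, v))

    def query(self, v):
        # Exact search restricted to the matching bucket (hence "approximate":
        # a true nearest neighbor hashed to another bucket is missed).
        candidates = self.buckets.get(self._key(v), [])
        if not candidates:
            return None
        return min((np.linalg.norm(v - w), i) for i, w in candidates)[1]
```

Because each descriptor is reduced to a short binary key and only one bucket is scanned per query, both memory footprint and query time drop sharply compared with exhaustive search, at the cost of occasionally missing the true nearest neighbor.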

    Prospects and challenges of radiomics by using nononcologic routine chest CT

    Chest CT scans are among the most common medical imaging procedures. The automatic extraction and quantification of imaging features may help in the diagnosis, prognosis, or treatment decisions of cardiovascular, pulmonary, and metabolic diseases. However, an adequate sample size, a statistical necessity for radiomics studies, is often difficult to achieve in prospective trials. By exploiting imaging data from clinical routine, a much larger amount of data could be used than in clinical trials. Still, there is little literature on the implementation of radiomics in clinical routine chest CT scans. The reasons are heterogeneous CT scanning protocols and the resulting technical variability (e.g., different slice thicknesses, reconstruction kernels, or timings after contrast material administration) in routine CT imaging data. This review summarizes the recent state of the art of studies aiming to develop quantifiable imaging biomarkers at chest CT, such as for osteoporosis, chronic obstructive pulmonary disease, interstitial lung disease, and coronary artery disease. It explains solutions to overcome heterogeneity in routine data, such as the use of imaging repositories, the standardization of radiomic features, algorithmic approaches to improve feature stability, test-retest studies, and the evolution of deep learning for modeling radiomic features.
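Feature standardization, one of the mitigation strategies listed above, can be sketched as follows. This is a deliberately minimal illustration under assumed names: a few first-order radiomic features computed from a region of interest in Hounsfield units, plus a cohort-wide z-score standardization that reduces scanner- and protocol-driven scale differences (real pipelines use richer feature sets and harmonization methods).

```python
import numpy as np

def first_order_features(roi_hu):
    """A few first-order radiomic features from a region of interest (HU)."""
    v = np.asarray(roi_hu, dtype=float).ravel()
    counts, _ = np.histogram(v, bins=32)
    p = counts[counts > 0] / counts.sum()
    return {
        "mean": float(v.mean()),
        "std": float(v.std()),
        "p10": float(np.percentile(v, 10)),
        "p90": float(np.percentile(v, 90)),
        "entropy": float(-(p * np.log2(p)).sum()),  # histogram entropy (bits)
    }

def zscore_standardize(features):
    """Standardize each feature column across the cohort (mean 0, std 1),
    a simple way to make features from different protocols comparable."""
    X = np.asarray(features, dtype=float)
    s = X.std(axis=0)
    return (X - X.mean(axis=0)) / np.where(s == 0, 1.0, s)
```

Standardization of this kind addresses scale differences only; variability from slice thickness or reconstruction kernel typically needs the additional measures the review describes, such as feature-stability filtering and test-retest analysis.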

    Unsupervised machine learning identifies predictive progression markers of IPF

    Objectives: To identify and evaluate predictive lung imaging markers and their pathways of change during progression of idiopathic pulmonary fibrosis (IPF) from sequential data of an IPF cohort, and to test whether these imaging markers predict outcome. Methods: We studied radiological disease progression in 76 patients with IPF, comprising 190 computed tomography (CT) examinations of the chest in total. An algorithm identified candidate imaging patterns marking progression by computationally clustering visual CT features. A classification algorithm then selected the clusters associated with radiological disease progression by testing their value for recognizing the temporal sequence of examinations. This resulted in radiological disease progression signatures and in pathways of lung tissue change accompanying progression observed across the cohort. Finally, we tested whether the dynamics of marker patterns predict outcome, and performed an external validation study on a cohort from a different center. Results: Progression marker patterns were identified and exhibited high stability in a repeatability experiment with 20 random sub-cohorts of the overall cohort. The four top-ranked progression markers were consistently selected as most informative for progression across all random sub-cohorts. After spatial image registration, local tracking of lung pattern transitions revealed a network of tissue transition pathways from healthy tissue through a sequence of diseased tissue types. The progression markers were predictive of outcome, and the model achieved comparable results on a replication cohort. Conclusions: Unsupervised learning can identify radiological disease progression markers that predict outcome. Local tracking of pattern transitions reveals pathways of radiological disease progression from healthy lung tissue through a sequence of diseased tissue types.
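The two unsupervised steps described above (clustering visual CT features into candidate tissue patterns, then tracking how pattern occupancy shifts between sequential examinations) can be sketched as follows. This is a generic stand-in for the study's pipeline: the minimal k-means here replaces the paper's clustering algorithm, and the occupancy-change score is an illustrative simplification, not the published progression signature.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Minimal k-means: group local CT feature vectors into k candidate
    tissue patterns (a stand-in for the paper's computational clustering)."""
    X = np.asarray(X, dtype=float)
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

def occupancy_change(labels_t0, labels_t1, k):
    """Change in cluster occupancy between baseline and follow-up scans;
    clusters whose fraction grows over time are candidate progression markers."""
    f0 = np.bincount(labels_t0, minlength=k) / len(labels_t0)
    f1 = np.bincount(labels_t1, minlength=k) / len(labels_t1)
    return f1 - f0
```

In this simplified picture, a pattern whose occupancy consistently increases from one examination to the next across patients behaves like a progression marker, which is the quantity the classification step above tests for its ability to recognize the temporal order of scans.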